
Make extract_toplevel_blocks() Faster #9045

Closed
wants to merge 3 commits into main from paw/top-level-blocks-acceleration

Conversation

peterallenwebb
Contributor

resolves #9037

Problem

The extract_toplevel_blocks() function was a severe bottleneck for some large-ish (>100 KB) files.

Solution

By caching string search results instead of re-searching many times, we were able to speed up processing by >500x on a ~500 KB file, from 15 s to 0.039 s.
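
As a rough sketch of the idea (not the actual code in core/dbt/clients/_jinja_blocks.py), a small helper like the hypothetical CachedFinder below remembers where each delimiter pattern was last found, so a forward-only scan never re-searches text it has already covered:

```python
# Hypothetical illustration of the caching idea; the real change lives in
# core/dbt/clients/_jinja_blocks.py and differs in detail.
from typing import Dict


class CachedFinder:
    """Cache str.find() results so each pattern is searched at most once
    per forward pass over the text, instead of on every call."""

    def __init__(self, text: str) -> None:
        self.text = text
        # pattern -> index of its most recently found occurrence (-1 = none left)
        self._cache: Dict[str, int] = {}

    def find(self, pattern: str, pos: int) -> int:
        cached = self._cache.get(pattern)
        # A cached hit is still valid if the pattern has no further occurrence
        # (-1) or the hit lies at or ahead of the current scan position.
        # This assumes the caller only ever scans forward through the text.
        if cached is not None and (cached == -1 or cached >= pos):
            return cached
        result = self.text.find(pattern, pos)
        self._cache[pattern] = result
        return result
```

With a helper like this, deciding which Jinja opener ({%, {{, or {#) comes next at each step costs a cached lookup instead of rescanning the remaining hundreds of kilobytes for every candidate pattern.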

Checklist

  • I have read the contributing guide and understand what's expected of me
  • I have run this code in development and it appears to resolve the stated issue
  • This PR includes tests, or tests are not required/relevant for this PR
  • This PR has no interface changes (e.g. macros, cli, logs, json artifacts, config files, adapter interface, etc) or this PR has already received feedback and approval from Product or DX
  • This PR includes type annotations for new and modified functions

Contributor

github-actions bot commented Nov 9, 2023

Thank you for your pull request! We could not find a changelog entry for this change. For details on how to document a change, see the contributing guide.



codecov bot commented Nov 9, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (bb35b3e) 86.55% compared to head (22d4c82) 86.52%.
Report is 5 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #9045      +/-   ##
==========================================
- Coverage   86.55%   86.52%   -0.04%     
==========================================
  Files         179      179              
  Lines       26538    26555      +17     
==========================================
+ Hits        22971    22977       +6     
- Misses       3567     3578      +11     
Flag Coverage Δ
integration 83.37% <100.00%> (-0.10%) ⬇️
unit 64.81% <100.00%> (+0.02%) ⬆️

Flags with carried forward coverage won't be shown.

Files Coverage Δ
core/dbt/clients/_jinja_blocks.py 92.68% <100.00%> (+0.66%) ⬆️

... and 3 files with indirect coverage changes


peterallenwebb deleted the paw/top-level-blocks-acceleration branch May 16, 2024 14:14
fredriv added a commit to fredriv/dbt-common that referenced this pull request Sep 10, 2024

fredriv commented Sep 10, 2024

I've replicated the changes from #9045 in a new PR for dbt-common: dbt-labs/dbt-common#189

This change reduces dbt parse time for our dbt project from 2m20s to 41s on my M1 Mac.

@gshank @peterallenwebb Anything else I would need to do for the PR? My company (Oda) has already signed the CLA.

fredriv added a commit to fredriv/dbt-common that referenced this pull request Sep 24, 2024
fredriv added a commit to fredriv/dbt-common that referenced this pull request Oct 7, 2024
fredriv added a commit to fredriv/dbt-common that referenced this pull request Oct 15, 2024
@peterallenwebb
Contributor Author

This was ultimately resolved in: dbt-labs/dbt-common#205

Development

Successfully merging this pull request may close these issues.

[CT-3361] Improve Docs Parsing Performance
4 participants